How to Handle Assumptions in Synthesis
The increased interest in reactive synthesis over the last decade has led to
many improved solutions but also to many new questions. In this paper, we
discuss the question of how to deal with assumptions on environment behavior.
We present four goals that we think should be met and review several different
possibilities that have been proposed. We argue that each of them falls short
in at least one aspect.
Comment: In Proceedings SYNT 2014, arXiv:1407.493
Synthesizing Robust Systems with RATSY
Specifications for reactive systems often consist of environment assumptions
and system guarantees. An implementation should not only be correct, but also
robust in the sense that it behaves reasonably even when the assumptions are
(temporarily) violated. We present an extension of the requirements analysis
and synthesis tool RATSY that is able to synthesize robust systems from GR(1)
specifications, i.e., systems in which a finite number of safety assumption
violations is guaranteed to induce only a finite number of safety guarantee
violations. We show how the specification can be turned into a two-pair Streett
game, and how a winning strategy corresponding to a correct and robust
implementation can be computed. Finally, we provide some experimental results.
Comment: In Proceedings SYNT 2012, arXiv:1207.055
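The winning-strategy computation described above rests on solving games over the specification's state space; a minimal sketch of the basic building block, the winning region of a safety game, is shown below. The function name and the toy game are illustrative assumptions, not RATSY's actual API or the full two-pair Streett construction.

```python
# Sketch: greatest-fixpoint computation of the system's winning region in a
# safety game. At system-controlled states the system picks one successor;
# at environment-controlled states the environment may pick any successor.
# This is a simplified illustration, not RATSY's implementation.

def safety_winning_region(states, system_states, succ, safe):
    """States from which the system can force the play to stay in `safe` forever."""
    win = {s for s in states if s in safe}
    changed = True
    while changed:
        changed = False
        for s in list(win):
            outs = succ[s]
            if s in system_states:
                ok = any(t in win for t in outs)   # system needs one good move
            else:
                ok = all(t in win for t in outs)   # environment may choose any move
            if not ok:
                win.discard(s)
                changed = True
    return win

# Toy game: state 0 is system-controlled, state 1 environment-controlled,
# state 2 is unsafe. From state 1 the environment can escape to state 2,
# so only state 0 (which can loop on itself) is winning.
states = {0, 1, 2}
system_states = {0}
succ = {0: [0, 1], 1: [0, 2], 2: [2]}
safe = {0, 1}
print(safety_winning_region(states, system_states, succ, safe))  # → {0}
```

A full GR(1) or Streett solver nests several such fixpoints, but the attractor-style iteration above is the core step each of them repeats.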
The Second Reactive Synthesis Competition (SYNTCOMP 2015)
We report on the design and results of the second reactive synthesis
competition (SYNTCOMP 2015). We describe our extended benchmark library, with 6
completely new sets of benchmarks, and additional challenging instances for 4
of the benchmark sets that were already used in SYNTCOMP 2014. To enhance the
analysis of experimental results, we introduce an extension of our benchmark
format with meta-information, including a difficulty rating and a reference
size for solutions. Tools are evaluated on a set of 250 benchmarks, selected to
provide a good coverage of benchmarks from all classes and difficulties. We
report on changes of the evaluation scheme and the experimental setup. Finally,
we describe the entrants into SYNTCOMP 2015, as well as the results of our
experimental evaluation. In our analysis, we emphasize progress over the tools
that participated last year.
Comment: In Proceedings SYNT 2015, arXiv:1602.0078
The 3rd Reactive Synthesis Competition (SYNTCOMP 2016): Benchmarks, Participants & Results
We report on the benchmarks, participants and results of the third reactive
synthesis competition (SYNTCOMP 2016). The benchmark library of SYNTCOMP 2016
has been extended with benchmarks in the new LTL-based temporal logic synthesis
format (TLSF), and 2 new sets of benchmarks for the existing AIGER-based format
for safety specifications. The participants of SYNTCOMP 2016 can be separated
according to these two classes of specifications, and we give an overview of
the 6 tools that entered the competition in the AIGER-based track, and the 3
participants that entered the TLSF-based track. We briefly describe the
benchmark selection, evaluation scheme and the experimental setup of SYNTCOMP
2016. Finally, we present and analyze the results of our experimental
evaluation, including a comparison to participants of previous competitions and
a legacy tool.
Comment: In Proceedings SYNT 2016, arXiv:1611.0717
Synthesizing adaptive test strategies from temporal logic specifications
Constructing good test cases is difficult and time-consuming, especially if the system under test is still under development and its exact behavior is not yet fixed. We propose a new approach to compute test strategies for reactive systems from a given temporal logic specification using formal methods. The computed strategies are guaranteed to reveal certain simple faults in every realization of the specification and for every behavior of the uncontrollable part of the system's environment. The proposed approach supports different assumptions on occurrences of faults (ranging from a single transient fault to a persistent fault) and by default aims at unveiling the weakest one. We argue that such tests are also sensitive to more complex bugs. Since the specification may not define the system behavior completely, we use reactive synthesis algorithms with partial information. The computed strategies are adaptive test strategies that react to behavior at runtime. We work out the underlying theory of adaptive test strategy synthesis and present experiments for a safety-critical component of a real-world satellite system. We demonstrate that our approach can be applied to industrial specifications and that the synthesized test strategies are capable of detecting bugs that are hard to detect with random testing.
Funding: European Commission through the Horizon2020 grant no. 64490
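The key property of an adaptive test strategy is that the next test input depends on the outputs observed so far, so the tester reacts to the implementation at runtime. A minimal sketch of this idea is shown below; the strategy rule and the mock system under test are illustrative assumptions, not the strategy format the paper's synthesis procedure produces.

```python
# Sketch of driving a system under test with an adaptive strategy:
# the strategy maps the observed input/output history to the next input.
# Strategy, inputs, and the mock system are hypothetical examples.

def run_adaptive_test(strategy, system, steps):
    """Drive `system` for `steps` steps, choosing inputs from the trace so far."""
    trace = []
    for _ in range(steps):
        inp = strategy(tuple(trace))   # adapt: choose input from history
        out = system(inp)              # observe the system's reaction
        trace.append((inp, out))
    return trace

def strategy(history):
    # Hypothetical rule: probe with 'req'; once the system answers 'nack',
    # switch to the fault-revealing input 'reset'.
    if any(out == "nack" for _, out in history):
        return "reset"
    return "req"

class Mock:
    """Mock system under test: acks the first request, then nacks."""
    def __init__(self):
        self.n = 0
    def __call__(self, inp):
        self.n += 1
        return "ack" if self.n == 1 else "nack"

print(run_adaptive_test(strategy, Mock(), 3))
# → [('req', 'ack'), ('req', 'nack'), ('reset', 'nack')]
```

A fixed (non-adaptive) test sequence would have to commit to all inputs up front; the branching on observed outputs is what lets one synthesized strategy work against every realization of a partially specified system.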